
    A lambda calculus for quantum computation with classical control

    The objective of this paper is to develop a functional programming language for quantum computers. We develop a lambda calculus for the classical control model, following the first author's work on quantum flow-charts. We define a call-by-value operational semantics, and we give a type system using affine intuitionistic linear logic. The main results of this paper are the safety properties of the language and the development of a type inference algorithm. Comment: 15 pages, submitted to TLCA'05. Note: this is essentially the work done during the first author's master's; his thesis can be found on his webpage. Modifications: almost everything reformulated; recursion removed, since the way it was stated did not satisfy lemma 11; type inference algorithm added; example of an implementation of quantum teleportation added.
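
    To make the flavour of such a calculus concrete, the following Haskell sketch shows one possible term syntax mixing classical control with opaque quantum operations. The data type, its constructor names, and the coinFlip example are illustrative assumptions, not the grammar or semantics defined in the paper.

```haskell
-- A hypothetical term syntax for a quantum lambda calculus with
-- classical control, loosely in the spirit of the abstract; it is not
-- the paper's grammar.  Quantum data is reached only through opaque
-- operations (New, Gate, Meas); control flow is ordinary, classical
-- lambda calculus.
data Term
  = Var String
  | Lam String Term        -- abstraction (affine use of the bound variable)
  | App Term Term          -- call-by-value application
  | New                    -- allocate a fresh qubit
  | Gate String Term       -- apply a named single-qubit gate
  | Meas Term              -- measure a qubit, yielding a classical boolean
  | BoolLit Bool
  | If Term Term Term      -- classical control over measurement outcomes
  deriving Show

-- Example: a fair coin flip -- allocate a qubit, apply Hadamard, measure.
coinFlip :: Term
coinFlip = Meas (Gate "H" New)
```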

    ASMs and Operational Algorithmic Completeness of Lambda Calculus

    We show that lambda calculus is a computation model that can simulate, step by step, any sequential deterministic algorithm for any computable function over integers, words, or any other datatype. More formally, given an algorithm over a family of computable functions (taken as primitive tools, i.e., a kind of oracle for the algorithm), for every constant K big enough, each computation step of the algorithm can be simulated by exactly K successive reductions in a natural extension of lambda calculus with constants for the functions in the considered family. The proof is based on a fixed-point technique in lambda calculus and on Gurevich's Sequential Thesis, which allows one to identify sequential deterministic algorithms with Abstract State Machines. This extends to algorithms for partial computable functions, in such a way that finite computations ending with exceptions are associated with finite reductions leading to terms with a particular, very simple feature. Comment: 37 pages.
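
    As a loose illustration of the fixed-point technique and of counting reduction steps, here is a tiny Haskell sketch of a normal-order reducer over de Bruijn terms, together with Turing's fixed-point combinator. It is an assumption-laden toy, not the paper's simulation of ASM steps by exactly K reductions.

```haskell
-- A toy untyped lambda calculus with de Bruijn indices, a step-counting
-- normal-order reducer, and Turing's fixed-point combinator Theta.
-- Purely illustrative; the paper's construction is far more delicate.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App t u) = App (shift d c t) (shift d c u)

subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App t u) = App (subst j s t) (subst j s u)

-- One leftmost-outermost beta step, if any redex exists.
step :: Term -> Maybe Term
step (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
step (App t u)       = case step t of
                         Just t' -> Just (App t' u)
                         Nothing -> App t <$> step u
step (Lam t)         = Lam <$> step t
step (Var _)         = Nothing

-- Number of beta steps to reach normal form (may diverge).
betaSteps :: Term -> Int
betaSteps t = maybe 0 (\t' -> 1 + betaSteps t') (step t)

-- Turing's fixed-point combinator: Theta f reduces to f (Theta f).
theta :: Term
theta = App a a
  where a = Lam (Lam (App (Var 0) (App (App (Var 1) (Var 1)) (Var 0))))
```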

    The graph rewriting calculus: confluence and expressiveness

    Introduced at the end of the nineties, the Rewriting Calculus (rho-calculus, for short) is a simple calculus that uniformly integrates term rewriting and lambda calculus. The Rhog has recently been introduced as an extension of the rho-calculus that handles structures with cycles and sharing. The calculus over terms is naturally generalized by using unification constraints in addition to the standard rho-calculus matching constraints. This leads to a term-graph representation in an equational style, where terms consist of unordered lists of equations. In this paper we show that the (linear) Rhog is confluent. The proof of this result is quite elaborate, due to the non-termination of the system and to the fact that we work on equivalence classes of terms. We also show that the Rhog can be seen as a generalization of first-order term-graph rewriting, in the sense that for any term-graph rewrite step a corresponding sequence of rewritings can be found in the Rhog.
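
    The "unordered lists of equations" view of term graphs can be pictured with a small Haskell data type; the names below (Node, TermGraph, and the two examples) are hypothetical and only illustrate how sharing and cycles are expressed equationally, not how the Rhog itself is defined.

```haskell
-- A hypothetical "equational" representation of term graphs: a graph
-- is a root variable plus an unordered list of equations binding
-- variables to shallow nodes.  Illustration only; not the Rhog calculus.
type Var = String

data Node = Fun String [Var]   -- function symbol applied to variables
          | Ref Var            -- indirection to another variable
  deriving Show

data TermGraph = TermGraph { root :: Var, eqs :: [(Var, Node)] }
  deriving Show

-- Sharing: both arguments of f point to the same node x.
shared :: TermGraph
shared = TermGraph "r" [ ("r", Fun "f" ["x", "x"])
                       , ("x", Fun "a" []) ]

-- A cycle: x = cons one x, i.e. the circular list 1:1:1:...
circular :: TermGraph
circular = TermGraph "x" [ ("x",   Fun "cons" ["one", "x"])
                         , ("one", Fun "1" []) ]
```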

    A standardisation proof for algebraic pattern calculi

    This work gives some insights and results on standardisation for call-by-name pattern calculi. More precisely, we define standard reductions for a pattern calculus with constructor-based data terms and patterns. This notion is based on reduction steps that are needed to match an argument with respect to a given pattern. We prove the Standardisation Theorem by using the technique developed by Takahashi and Crary for lambda-calculus. The proof is based on the fact that any development can be specified as a sequence of head steps followed by internal reductions, i.e. reductions in which no head steps are involved. Comment: In Proceedings HOR 2010, arXiv:1102.346
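
    For readers unfamiliar with constructor-based matching, the Haskell sketch below shows the basic operation the abstract refers to: matching an argument against a constructor pattern to produce a substitution. The types Pat, Val, and the match function are assumptions made for illustration, not the paper's calculus or its match relation.

```haskell
import qualified Data.Map as Map

-- A toy constructor-based matcher, illustrating the kind of matching
-- the abstract refers to (an argument matched against a constructor
-- pattern, yielding a substitution).  It is not the paper's calculus.
data Pat = PVar String | PCon String [Pat]  deriving Show
data Val = VCon String [Val]                deriving Show

type Subst = Map.Map String Val

match :: Pat -> Val -> Maybe Subst
match (PVar x)    v           = Just (Map.singleton x v)
match (PCon c ps) (VCon d vs)
  | c == d && length ps == length vs =
      fmap Map.unions (mapM (uncurry match) (zip ps vs))
  | otherwise = Nothing

-- Example: matching the pattern (cons x xs) against the value (cons 1 nil).
example :: Maybe Subst
example = match (PCon "cons" [PVar "x", PVar "xs"])
                (VCon "cons" [VCon "1" [], VCon "nil" []])
```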

    A Paraconsistent Higher Order Logic

    Classical logic predicts that everything (thus nothing useful at all) follows from an inconsistency. A paraconsistent logic is a logic in which an inconsistency does not lead to such an explosion, and since in practice consistency is difficult to achieve, there are many potential applications of paraconsistent logics in knowledge-based systems, logical semantics of natural language, etc. Higher order logics have the advantages of being expressive and of having several automated theorem provers available; the type system can also be helpful. We present a concise description of a paraconsistent higher order logic with countably infinite indeterminacy, where each basic formula can get its own indeterminate truth value (or, as we prefer, truth code). The meaning of the logical operators is new and rather different from traditional many-valued logics as well as from logics based on bilattices. The adequacy of the logic is examined by a case study in the domain of medicine. We thus try to build a bridge between the HOL and MVL communities. A sequent calculus is proposed based on recent work by Muskens. Comment: Originally in the proceedings of PCL 2002, editors Hendrik Decker, Joergen Villadsen, Toshiharu Waragai (http://floc02.diku.dk/PCL/). Corrected.
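
    Merely to picture the value space the abstract mentions (two classical values plus a countably infinite supply of truth codes), here is a hypothetical Haskell datatype; the negation shown is a naive placeholder and does not reproduce the paper's operator semantics, which the abstract stresses are new.

```haskell
-- A purely illustrative value space for "truth codes": the two
-- classical values plus a countably infinite supply of indeterminate
-- codes, one of which each basic formula may receive.  The paper's
-- operator semantics is NOT modelled here.
data TruthCode = T | F | Indeterminate Integer
  deriving (Eq, Show)

-- A naive placeholder for negation: committed only on the classical
-- values, identity on indeterminate codes (an assumption, not the
-- paper's definition).
negTC :: TruthCode -> TruthCode
negTC T                 = F
negTC F                 = T
negTC (Indeterminate n) = Indeterminate n
```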

    Repetitive Reduction Patterns in Lambda Calculus with letrec (Work in Progress)

    For the lambda-calculus with letrec we develop an optimisation based on the contraction of a certain class of 'future' (also: virtual) redexes. In the implementation of functional programming languages it is common practice to perform beta-reductions at compile time whenever possible, in order to produce code that requires fewer reductions at run time. This is, however, in principle limited to redexes and created redexes that are 'visible' (in the sense that they can be contracted without the need for unsharing), and it cannot generally be extended to redexes that are concealed by sharing constructs such as letrec. In the case of recursion, concealed redexes become visible only after unwindings during evaluation, and then have to be contracted time and again. We observe that in some cases such redexes exhibit a certain form of repetitive behaviour at run time. We describe an analysis for identifying binders that give rise to such repetitive reduction patterns, and we eliminate them by a sort of predictive contraction. These binders are thereby lifted out of recursive positions or eliminated altogether, reducing the number of beta-reductions required for each recursive iteration. Both our analysis and our simplification are suitable for integration into existing compilers for functional programming languages as an additional optimisation phase. With this work we hope to contribute to increasing the efficiency of executing programs written in such languages. Comment: In Proceedings TERMGRAPH 2011, arXiv:1102.226
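
    As an informal picture of what lifting a binder out of a recursive position can buy, consider the Haskell pair below: the first definition rebuilds an application inside every recursive call, while the hand-transformed second one performs it once, outside the recursion. The functions scaleSum and scaleSum' are invented for illustration and are not taken from the paper.

```haskell
-- Before: the beta-redex (\f -> ...) applied to (\y -> c * y) sits
-- inside the recursive clause, so it is contracted again on every
-- unwinding of the recursion.
scaleSum :: Double -> [Double] -> Double
scaleSum c []       = 0
scaleSum c (x : xs) = (\f -> f x + scaleSum c xs) (\y -> c * y)

-- After a hand-made "predictive contraction": the binder is lifted out
-- of the recursive position, so the application happens once and each
-- iteration saves a beta step.
scaleSum' :: Double -> [Double] -> Double
scaleSum' c = go
  where
    f  y        = c * y
    go []       = 0
    go (x : xs) = f x + go xs
```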